fix: lock langchain version #391
base: master
Conversation
📝 Walkthrough

Adds a new abstract middleware type and extensive test utilities for LangChain decorators, and pins LangChain-related dependencies to exact versions in two package.json files.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed.
Summary of Changes

Hello @akitaSummer, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request focuses on improving the stability and predictability of the project's LangChain dependencies.
Code Review
This pull request aims to lock the versions of langchain dependencies to ensure stability. The changes correctly pin the versions for @langchain/core, @langchain/langgraph, and langchain in two package.json files. My review includes a critical fix for a typo in a version string that would break dependency installation. I've also included suggestions to improve consistency by pinning other related @langchain/* dependencies that are currently using version ranges. This would fully align the changes with the stated goal of the pull request.
plugin/langchain/package.json (outdated)

```json
    "@langchain/community": "^1.0.0",
    "@langchain/core": "^1.1.1",
    "@langchain/langgraph": "^1.0.2",
    "@langchain/core": "^.1.11",
```
There seems to be a typo in the version for @langchain/core. The value ^.1.11 is not a valid semver range and will likely cause dependency installation to fail. It should probably be pinned to 1.1.11. Additionally, for consistency with the PR's goal of locking versions, consider also pinning other @langchain/* dependencies in this file, such as @langchain/community (line 68), @langchain/mcp-adapters (line 71), and @langchain/openai (line 72).
```diff
-    "@langchain/core": "^.1.11",
+    "@langchain/core": "1.1.11",
```
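The reviewer's point about invalid ranges can be checked mechanically. Below is an illustrative sketch (not part of the PR; the simplified regex is an assumption, not a full semver parser) that flags any `langchain`-family dependency whose specifier is not an exact `x.y.z` pin. It would catch both the `^.1.11` typo and leftover caret ranges:

```typescript
// Illustrative sketch: flag langchain-family deps that are not exact pins.
// The regex below is a simplified check, not a complete semver grammar.
const EXACT_SEMVER = /^\d+\.\d+\.\d+$/;

function findUnpinned(deps: Record<string, string>): string[] {
  return Object.entries(deps)
    .filter(([name]) => name === 'langchain' || name.startsWith('@langchain/'))
    .filter(([, spec]) => !EXACT_SEMVER.test(spec))
    .map(([name, spec]) => `${name}: ${spec}`);
}

// Example using versions from this hunk: "^.1.11" is neither exact nor valid.
const deps = {
  '@langchain/core': '^.1.11',
  '@langchain/langgraph': '1.0.7',
  langchain: '1.1.6',
};
console.log(findUnpinned(deps)); // flags only @langchain/core
```

A check like this could run in CI against each workspace's package.json to keep the pins from regressing.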
```json
    "@langchain/core": "1.1.11",
    "@langchain/langgraph": "1.0.7",
    "@langchain/openai": "^1.1.0",
    "langchain": "^1.1.2",
    "langchain": "1.1.6",
```
While you've pinned the versions for @langchain/core, @langchain/langgraph, and langchain, other langchain-related dependencies like @langchain/community (line 43) and @langchain/openai (line 46) are still using version ranges. For consistency and to fully achieve the goal of locking langchain versions, consider pinning these as well.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @plugin/langchain/package.json:
- Line 69: Update the invalid semver for the dependency key "@langchain/core":
replace the broken value "^.1.11" with the exact version string "1.1.11" so the
package.json contains "@langchain/core": "1.1.11" (matching the version used in
core/langchain-decorator).
🧹 Nitpick comments (1)
core/langchain-decorator/package.json (1)
44-45: Consider the implications of exact version pinning. The dependency updates correctly pin `@langchain/core`, `@langchain/langgraph`, and `langchain` to exact versions. This strategy eliminates automatic minor/patch updates and requires manual version bumps. While exact pinning provides reproducibility and prevents unexpected breaking changes, it also means:
- Security patches won't be automatically applied
- Bug fixes in patch releases require manual updates
- Maintenance overhead increases across the monorepo
Ensure your CI/CD pipeline includes automated dependency checks (e.g., Dependabot, Renovate) to monitor for updates and security advisories for these pinned versions.
Also applies to: 47-47
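Concretely, the monitoring suggestion above could look like this Renovate configuration sketch (field names are taken from Renovate's documentation; the grouping and the `@langchain/**` glob are assumptions to verify against your Renovate version):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchPackageNames": ["langchain", "@langchain/**"],
      "rangeStrategy": "pin",
      "groupName": "langchain packages"
    }
  ]
}
```

With a rule like this, the pinned versions still receive update PRs when new releases or security advisories appear, which addresses the maintenance-overhead concern.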
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- core/langchain-decorator/package.json
- plugin/langchain/package.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Socket Security: Pull Request Alerts
🔇 Additional comments (1)
plugin/langchain/package.json (1)
70-70: Exact version pins are correctly formatted and verified on npm. The pinned versions exist on npm without security warnings: `@langchain/langgraph@1.0.7`, `langchain@1.1.6`, and `@langchain/core@1.1.11`.
Socket Security: Review the following changes in direct dependencies.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In @core/langchain-decorator/src/type/middleware.ts:
- Around line 1-10: The class TeggAgentMiddleware should not implement
createMiddlewareParams; remove the "implements createMiddlewareParams" clause,
explicitly type the name property as "name: string" on the class, and add an
abstract method (e.g., getMiddlewareConfig(): createMiddlewareParams) that
subclasses must implement to return the actual middleware config object to be
passed into createMiddleware; keep the constructor that assigns this.name =
this.constructor.name and update any callers to use
instance.getMiddlewareConfig() when constructing middleware via
createMiddleware.
In @core/langchain-decorator/test/utils.ts:
- Around line 116-156: FakeConfigurableModel.bindTools currently unsafely casts
this._chatModel to FakeToolCallingChatModel and will throw at runtime if
fields.model doesn't implement bindTools; update the constructor signature to
constrain fields.model to an interface that includes bindTools (or accept a
union with that interface) and use that typed property when creating
modelWithTools, or if you cannot change the type, add an explicit runtime check
in bindTools (e.g., verify typeof this._chatModel.bindTools === 'function') and
throw a clear error referencing FakeConfigurableModel.bindTools when bindTools
is missing before calling it.
- Around line 286-326: In MemorySaverAssertImmutable.put, ensure you fail fast
if config.configurable?.thread_id is missing (throw a clear error instead of
using the "undefined" bucket) and use the actual Uint8Array byte comparison
rather than decoding to strings: locate the thread_id usage in
MemorySaverAssertImmutable.put, validate and throw when thread_id is falsy,
ensure this.storageForCopies[thread_id] is initialized, and when comparing saved
vs stored copies (after calling this.serde.dumpsTyped(saved) and
this.serde.dumpsTyped(checkpoint)) compare the Uint8Array contents directly
(byte-by-byte or via length+every-byte equality) instead of using TextDecoder.
- Around line 158-285: The inline comment in
FakeToolCallingChatModel.withStructuredOutput incorrectly states the function
returns "{ raw: BaseMessage, parsed: RunOutput }" while the implementation
returns only this.structuredResponse (RunOutput); update or remove that
misleading comment to reflect the actual return shape (RunOutput only) so the
comment matches the code in the withStructuredOutput method of
FakeToolCallingChatModel.
🧹 Nitpick comments (2)
core/langchain-decorator/test/utils.ts (2)
36-114: Exported `_AnyId*Message` helpers are fine, but name the intent and tighten the types a bit. Since these are exported, consider dropping the leading `_` (which reads as "private") and adding explicit return types for `lc_id` and the `AnyString` matcher methods to reduce inference drift across TS configs.

350-498: `FakeToolCallingModel` is serviceable, but the `content` building is a bit fragile. The concatenation logic mixes multiple possible `BaseMessage['content']` shapes; consider extracting it to a helper with explicit type guards to keep future changes safe (and easier to unit test).
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- core/langchain-decorator/src/type/middleware.ts
- core/langchain-decorator/test/utils.ts
- plugin/langchain/package.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Socket Security: Pull Request Alerts
- GitHub Check: Runner-macos (18)
🔇 Additional comments (3)
core/langchain-decorator/test/utils.ts (2)
342-346: `createCheckpointer()` looks good for tests. Simple and predictable default.

500-515: `SearchAPI` test tool is clean. Clear schema + deterministic behavior.

plugin/langchain/package.json (1)

69-75: Dependency pinning is correctly applied. The exact versions for `@langchain/core` (1.1.11), `@langchain/langgraph` (1.0.7), and `langchain` (1.1.6) are properly pinned. Cross-workspace consistency is not a concern, as these dependencies are isolated to the `plugin/langchain` workspace. Note that the repository does not currently have a lockfile (no `package-lock.json`, `pnpm-lock.yaml`, or `yarn.lock`); if lockfile generation is desired for deterministic installs, that would be a separate repository-wide initiative.
```typescript
import { createMiddleware } from 'langchain';

type createMiddlewareParams = Parameters<typeof createMiddleware>['0'];

export abstract class TeggAgentMiddleware implements createMiddlewareParams {
  name;
  constructor() {
    this.name = this.constructor.name;
  }
}
```
🧩 Analysis chain
🌐 Web query:
langchain createMiddleware API TypeScript type definition
💡 Result:
Signature (TypeScript, simplified):

```typescript
import { AgentMiddleware } from "langchain";

declare function createMiddleware<
  TSchema = undefined,
  TContextSchema = undefined
>(config: {
  name: string;
  stateSchema?: TSchema;
  contextSchema?: TContextSchema;
  beforeAgent?: BeforeAgentHook<TSchema, NormalizeContextSchema>;
  afterAgent?: AfterAgentHook<TSchema, NormalizeContextSchema>;
  beforeModel?: BeforeModelHook<TSchema, NormalizeContextSchema>;
  afterModel?: AfterModelHook<TSchema, NormalizeContextSchema>;
  wrapModelCall?: WrapModelCallHook<TSchema, NormalizeContextSchema>;
  wrapToolCall?: WrapToolCallHook<TSchema, NormalizeContextSchema>;
  tools?: any[];
}): AgentMiddleware<TSchema, TContextSchema, any>;
```
References: LangChain createMiddleware docs and AgentMiddleware/type refs. [1][2][3]
Sources
[1] createMiddleware docs. [2] AgentMiddleware interface/type. [3] middleware types (InferMiddlewareContext/State).
🏁 Script executed:
```shell
# Search the codebase for any imports or usage of createMiddleware
rg "createMiddleware" --type ts --type tsx -B 2 -A 2

# Look for langchain imports in the codebase
rg "from 'langchain'" --type ts --type tsx | head -20

# Check if there's a package.json to see langchain version
cat package.json | grep -A 5 -B 5 langchain
```

Repository: eggjs/tegg
Length of output: 164
TypeScript implementation error: class cannot safely implement createMiddlewareParams and name is implicitly any.
The config object expected by createMiddleware requires a name: string property along with optional hooks (beforeAgent, afterAgent, beforeModel, afterModel, wrapModelCall, wrapToolCall, tools). Implementing this type on the class doesn't make semantic sense—it suggests instances are themselves valid middleware config objects, which they are not. The untyped name property will also fail under noImplicitAny.
Remove the implements clause, explicitly type name: string, and have subclasses provide the actual config object via a method if middleware parameters need to be constructed:
Suggested approach:

```diff
 import { createMiddleware } from 'langchain';

-type createMiddlewareParams = Parameters<typeof createMiddleware>['0'];
+export type CreateMiddlewareParams = Parameters<typeof createMiddleware>[0];

-export abstract class TeggAgentMiddleware implements createMiddlewareParams {
-  name;
+export abstract class TeggAgentMiddleware {
+  public readonly name: string;
   constructor() {
     this.name = this.constructor.name;
   }
 }
```
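To make the suggested direction concrete, here is a self-contained sketch of the pattern (the `getMiddlewareConfig` method name and the local `CreateMiddlewareParams` stand-in type are illustrative assumptions; in the real code the type would come from `Parameters<typeof createMiddleware>[0]`):

```typescript
// Stand-in for Parameters<typeof createMiddleware>[0], so this sketch runs
// without the langchain package installed.
type CreateMiddlewareParams = {
  name: string;
  beforeModel?: (...args: unknown[]) => unknown;
};

abstract class TeggAgentMiddleware {
  public readonly name: string;
  constructor() {
    // Derive the middleware name from the concrete subclass.
    this.name = this.constructor.name;
  }
  // Subclasses supply the actual config object passed to createMiddleware.
  abstract getMiddlewareConfig(): CreateMiddlewareParams;
}

class LoggingMiddleware extends TeggAgentMiddleware {
  getMiddlewareConfig(): CreateMiddlewareParams {
    return { name: this.name };
  }
}

const mw = new LoggingMiddleware();
console.log(mw.getMiddlewareConfig().name); // "LoggingMiddleware"
```

Callers would then pass `instance.getMiddlewareConfig()` into `createMiddleware`, rather than treating the instance itself as the config object.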
```typescript
export class FakeConfigurableModel extends BaseChatModel {
  _queuedMethodOperations: Record<string, any> = {};

  _chatModel: LanguageModelLike;

  constructor(
    fields: {
      model: LanguageModelLike;
    } & BaseChatModelParams,
  ) {
    super(fields);
    this._chatModel = fields.model;
  }

  _llmType() {
    return 'fake_configurable';
  }

  async _generate(
    // eslint-disable-next-line @typescript-eslint/no-unused-vars
    _messages: BaseMessage[],
    // eslint-disable-next-line @typescript-eslint/no-unused-vars
    _options: this['ParsedCallOptions'],
    // eslint-disable-next-line @typescript-eslint/no-unused-vars
    _runManager?: CallbackManagerForLLMRun,
  ): Promise<ChatResult> {
    throw new Error('Not implemented');
  }

  async _model() {
    return this._chatModel;
  }

  bindTools(tools: BindToolsInput[]) {
    const modelWithTools = new FakeConfigurableModel({
      model: (this._chatModel as FakeToolCallingChatModel).bindTools(tools),
    });
    modelWithTools._queuedMethodOperations.bindTools = tools;
    return modelWithTools;
  }
}
```
`FakeConfigurableModel.bindTools` unsafe cast can explode at runtime.
`(this._chatModel as FakeToolCallingChatModel).bindTools(...)` will throw if the wrapped model isn't that fake. Consider constraining `fields.model` to an interface that has `bindTools`, or throwing a clearer error when it is missing.
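A minimal sketch of the fail-fast variant (the class names here are local stand-ins, not the real LangChain types): instead of casting, probe for `bindTools` at runtime and raise a descriptive error when it is absent.

```typescript
// Local stand-ins for the real model classes, so the sketch is runnable.
interface ToolLike { name: string }

class ModelWithTools {
  bindTools(tools: ToolLike[]): string {
    return `bound:${tools.map(t => t.name).join(',')}`;
  }
}

class ModelWithoutTools {}

function bindToolsSafely(model: unknown, tools: ToolLike[]): string {
  const candidate = model as { bindTools?: (tools: ToolLike[]) => string };
  if (typeof candidate.bindTools !== 'function') {
    // Fail fast with a clear message instead of a confusing TypeError.
    throw new Error(
      'FakeConfigurableModel.bindTools: wrapped model does not implement bindTools',
    );
  }
  return candidate.bindTools(tools);
}

console.log(bindToolsSafely(new ModelWithTools(), [{ name: 'search' }])); // "bound:search"
```

The same guard would slot into `FakeConfigurableModel.bindTools` before constructing `modelWithTools`.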
```typescript
export class FakeToolCallingChatModel extends BaseChatModel {
  sleep?: number = 50;

  responses?: BaseMessage[];

  thrownErrorString?: string;

  idx: number;

  toolStyle: 'openai' | 'anthropic' | 'bedrock' | 'google' = 'openai';

  structuredResponse?: Record<string, unknown>;

  // Track messages passed to structured output calls
  structuredOutputMessages: BaseMessage[][] = [];

  constructor(
    fields: {
      sleep?: number;
      responses?: BaseMessage[];
      thrownErrorString?: string;
      toolStyle?: 'openai' | 'anthropic' | 'bedrock' | 'google';
      structuredResponse?: Record<string, unknown>;
    } & BaseChatModelParams,
  ) {
    super(fields);
    this.sleep = fields.sleep ?? this.sleep;
    this.responses = fields.responses;
    this.thrownErrorString = fields.thrownErrorString;
    this.idx = 0;
    this.toolStyle = fields.toolStyle ?? this.toolStyle;
    this.structuredResponse = fields.structuredResponse;
    this.structuredOutputMessages = [];
  }

  _llmType() {
    return 'fake';
  }

  async _generate(
    messages: BaseMessage[],
    _options: this['ParsedCallOptions'],
    runManager?: CallbackManagerForLLMRun,
  ): Promise<ChatResult> {
    if (this.thrownErrorString) {
      throw new Error(this.thrownErrorString);
    }
    if (this.sleep !== undefined) {
      await new Promise(resolve => setTimeout(resolve, this.sleep));
    }
    const responses = this.responses?.length ? this.responses : messages;
    const msg = responses[this.idx % responses.length];
    const generation: ChatResult = {
      generations: [
        {
          text: '',
          message: msg,
        },
      ],
    };
    this.idx += 1;

    if (typeof msg.content === 'string') {
      await runManager?.handleLLMNewToken(msg.content);
    }
    return generation;
  }

  bindTools(tools: BindToolsInput[]): Runnable<any> {
    const toolDicts = [];
    const serverTools = [];
    for (const tool of tools) {
      if (!('name' in tool)) {
        serverTools.push(tool);
        continue;
      }

      // NOTE: this is a simplified tool spec for testing purposes only
      if (this.toolStyle === 'openai') {
        toolDicts.push({
          type: 'function',
          function: {
            name: tool.name,
          },
        });
      } else if ([ 'anthropic', 'google' ].includes(this.toolStyle)) {
        toolDicts.push({
          name: tool.name,
        });
      } else if (this.toolStyle === 'bedrock') {
        toolDicts.push({
          toolSpec: {
            name: tool.name,
          },
        });
      }
    }
    let toolsToBind: BindToolsInput[] = toolDicts;
    if (this.toolStyle === 'google') {
      toolsToBind = [{ functionDeclarations: toolDicts }];
    }
    return this.withConfig({
      tools: [ ...toolsToBind, ...serverTools ],
    } as BaseChatModelCallOptions);
  }

  withStructuredOutput<
    RunOutput extends Record<string, any> = Record<string, any>
  // eslint-disable-next-line @typescript-eslint/no-unused-vars
  >(_: unknown): Runnable<any> {
    if (!this.structuredResponse) {
      throw new Error('No structured response provided');
    }
    // Create a runnable that returns the proper structured format
    return RunnableLambda.from(async (messages: BaseMessage[]) => {
      if (this.sleep) {
        await new Promise(resolve => setTimeout(resolve, this.sleep));
      }

      // Store the messages that were sent to generate structured output
      this.structuredOutputMessages.push([ ...messages ]);

      // Return in the format expected: { raw: BaseMessage, parsed: RunOutput }
      return this.structuredResponse as RunOutput;
    });
  }
}
```
🧩 Analysis chain
🌐 Web query:
LangChain 1.1.6 BaseChatModel withStructuredOutput return shape documentation
💡 Result:
BaseChatModel.withStructuredOutput returns a Runnable whose output shape depends on the generic/config you pass:
- If you use the simple form it returns the parsed structured value: RunOutput (i.e., your parsed JSON/Pydantic/Zod result). [1]
- If you call the variant that requests the raw message, it returns an object { parsed: RunOutput; raw: BaseMessage } (the parsed output plus the raw model message). [1][2]
(Where RunOutput is the schema type you supplied — e.g., Record<string, any>, a Zod type, Pydantic model, or JSON Schema.) [1][2]
Sources: LangChain BaseChatModel / BaseChatOpenAI API docs. [1] https://v03.api.js.langchain.com/classes/_langchain_core.language_models_chat_models.BaseChatModel.html [2] https://v03.api.js.langchain.com/classes/_langchain_openai.BaseChatOpenAI.html
🏁 Script executed:
```shell
# Search the codebase for how withStructuredOutput is being used and what's expected
rg "withStructuredOutput" --type ts --type js -A 5 -B 2
```

Repository: eggjs/tegg
Length of output: 1183
Remove misleading comment from withStructuredOutput(...) return statement.
The code correctly returns only this.structuredResponse (matching LangChain's simple form), but the comment claims { raw: BaseMessage, parsed: RunOutput }. Either remove the comment or update it to match the actual return shape (RunOutput only).
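For reference, the two documented return shapes can be sketched with plain stubs (names are illustrative; no real model is involved). The fake implements the first, simple form, while the removed comment described the second:

```typescript
type RunOutput = Record<string, unknown>;

// Simple form: the runnable yields just the parsed structured value.
function structuredOnly(parsed: RunOutput): RunOutput {
  return parsed;
}

// includeRaw form: the runnable yields { raw, parsed } instead.
function structuredWithRaw(parsed: RunOutput, rawText: string) {
  return { raw: { content: rawText }, parsed };
}

console.log(structuredOnly({ answer: 42 }));
console.log(structuredWithRaw({ answer: 42 }, 'raw model message'));
```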
```typescript
export class MemorySaverAssertImmutable extends MemorySaver {
  storageForCopies: Record<string, Record<string, Uint8Array>> = {};

  constructor() {
    super();
    this.storageForCopies = {};
  }

  async put(
    config: RunnableConfig,
    checkpoint: Checkpoint,
    metadata: CheckpointMetadata,
  ): Promise<RunnableConfig> {
    const thread_id = config.configurable?.thread_id;
    this.storageForCopies[thread_id] ??= {};

    // assert checkpoint hasn't been modified since last written
    const saved = await this.get(config);
    if (saved) {
      const savedId = saved.id;
      if (this.storageForCopies[thread_id][savedId]) {
        const [ , serializedSaved ] = await this.serde.dumpsTyped(saved);
        const serializedCopy = this.storageForCopies[thread_id][savedId];

        // Compare Uint8Array contents by converting to string
        const savedStr = new TextDecoder().decode(serializedSaved);
        const copyStr = new TextDecoder().decode(serializedCopy);
        if (savedStr !== copyStr) {
          throw new Error(
            `Checkpoint [${savedId}] has been modified since last written`,
          );
        }
      }
    }
    const [ , serializedCheckpoint ] = await this.serde.dumpsTyped(checkpoint);
    // save a copy of the checkpoint
    this.storageForCopies[thread_id][checkpoint.id] = serializedCheckpoint;

    return super.put(config, checkpoint, metadata);
  }
}
```
`thread_id` can be undefined and collapse all copies into one bucket.
If `config.configurable?.thread_id` is missing, everything writes under the `"undefined"` key, weakening the immutability assertion. Consider throwing when it is absent, and compare bytes directly (not TextDecoder strings) to avoid encoding artifacts.
Possible adjustment:

```diff
 async put(
   config: RunnableConfig,
   checkpoint: Checkpoint,
   metadata: CheckpointMetadata,
 ): Promise<RunnableConfig> {
   const thread_id = config.configurable?.thread_id;
-  this.storageForCopies[thread_id] ??= {};
+  if (!thread_id) throw new Error('Missing configurable.thread_id');
+  this.storageForCopies[thread_id] ??= {};
@@
-  const savedStr = new TextDecoder().decode(serializedSaved);
-  const copyStr = new TextDecoder().decode(serializedCopy);
-  if (savedStr !== copyStr) {
+  if (
+    serializedSaved.length !== serializedCopy.length ||
+    serializedSaved.some((b, i) => b !== serializedCopy[i])
+  ) {
     throw new Error(
       `Checkpoint [${savedId}] has been modified since last written`,
     );
   }
```
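The byte-level comparison suggested above can be factored into a small helper (the name is illustrative). Decoding with `TextDecoder` replaces invalid UTF-8 sequences with U+FFFD, so two different byte arrays can decode to identical strings; comparing bytes avoids that:

```typescript
// Compare two Uint8Arrays byte-by-byte, without any string decoding.
function bytesEqual(a: Uint8Array, b: Uint8Array): boolean {
  if (a.length !== b.length) return false;
  for (let i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) return false;
  }
  return true;
}

console.log(bytesEqual(new Uint8Array([1, 2, 3]), new Uint8Array([1, 2, 3]))); // true
// 0xFF and 0xFE both decode to U+FFFD, but the bytes differ:
console.log(bytesEqual(new Uint8Array([0xff]), new Uint8Array([0xfe]))); // false
```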
Checklist

- `npm test` passes

Affected core subsystem(s)

Description of change

Summary by CodeRabbit

- Chores
- New Features
- Tests